16 research outputs found

    Reusing view-dependent animation

    In this paper we present techniques for reusing view-dependent animation. First, we provide a framework for representing view-dependent animations. We formulate the concept of a view space: the space formed by the key views and their associated character poses. Tracing a path on the view space generates the corresponding view-dependent animation in real time. We then demonstrate that the framework can be used to synthesize new stylized animations by reusing view-dependent animations. We present three novel reuse techniques. In the first, we show how to animate multiple characters from the same view space. Next, we show how to animate multiple characters from multiple view spaces, and we use this technique to animate a crowd of characters. Finally, we draw inspiration from cubist paintings and create their view-dependent analogues by using different cameras to control different body parts of the same character.
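    The view-space idea can be sketched minimally: pair each key view with a character pose and blend poses by the query view's proximity to the key views. The inverse-distance weighting, names, and data below are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

# Key camera views paired with character poses (here, 2 joint angles each).
key_views = np.array([[0.0, 0.0, 5.0],
                      [5.0, 0.0, 0.0],
                      [0.0, 5.0, 0.0]])
key_poses = np.array([[0.0, 0.0],
                      [1.0, 0.5],
                      [0.5, 1.0]])

def pose_for_view(view, eps=1e-8):
    """Blend key poses by inverse distance from the query view to the key views."""
    d = np.linalg.norm(key_views - view, axis=1)
    if np.any(d < eps):                  # query sits exactly on a key view
        return key_poses[np.argmin(d)]
    w = 1.0 / d
    w /= w.sum()
    return w @ key_poses

# Tracing a path in view space generates the animation frame by frame.
path = [key_views[0] * (1 - t) + key_views[1] * t for t in np.linspace(0, 1, 5)]
frames = [pose_for_view(v) for v in path]
```

    At the path's endpoints the blend reduces to the key poses themselves, so the traced animation interpolates the authored key poses in between.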

    Self adaptive animation based on user perspective

    In this paper we present a new character animation technique in which the animation adapts itself to changes in the user's perspective: when the user moves and their viewpoint on the animation changes, the character animation adapts in response. The resulting animation, generated in real time, is a blend of key animations provided a priori by the animator. The blending is done with efficient dual-quaternion transformation blending. The user's point of view is tracked using either computer vision techniques or a simple user-controlled input modality, such as mouse-based input, and the tracked point of view is then used to select a suitable blend of animations. We show how to author and use such animations in both virtual and augmented reality scenarios, and demonstrate that they significantly heighten the users' sense of presence when they interact with such self-adaptive animations of virtual characters.
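    The dual-quaternion blending the abstract refers to can be illustrated with a toy dual-quaternion linear blend (DLB). This sketch blends pure translations only (identity rotations); the quaternion layout (w, x, y, z), names, and data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dq_from_translation(t):
    """Dual quaternion for a pure translation t (identity rotation)."""
    real = np.array([1.0, 0.0, 0.0, 0.0])
    dual = 0.5 * np.array([0.0, t[0], t[1], t[2]])  # 0.5 * (0, t) * real
    return real, dual

def dq_blend(dqs, weights):
    """Weighted linear blend of dual quaternions, renormalized (DLB)."""
    real = sum(w * r for (r, _), w in zip(dqs, weights))
    dual = sum(w * d for (_, d), w in zip(dqs, weights))
    n = np.linalg.norm(real)
    return real / n, dual / n

def dq_translation(dq):
    """Recover the translation of a unit dual quaternion (identity-rotation case)."""
    real, dual = dq
    return 2.0 * dual[1:]   # t = 2 * dual * conj(real); real is identity here

a = dq_from_translation([0.0, 0.0, 0.0])
b = dq_from_translation([2.0, 0.0, 0.0])
mid = dq_blend([a, b], [0.5, 0.5])   # halfway between the two key transforms
```

    With rotations included, the same renormalized linear blend avoids the skin-collapse artifacts of plain matrix blending, which is why it suits real-time viewpoint-driven blending.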

    Making Them Remember-Emotional Virtual Characters with Memory

    The search for the perfect virtual character is on, but the moment users interact with characters, any illusion that we’ve found it is broken. Adding memory capabilities to models of human emotions, personality, and behavior traits is a step toward a more natural interaction style.

    Camera-based Gaze Control for Virtual Characters

    Virtual characters have become crucial to many interactive 3D graphical virtual environment simulations in the context of virtual reality and computer games. One of the primary concerns of the designers of such environments is modeling the interaction of the virtual characters with the users. The gaze of the virtual character forms a chief component of this interaction: when it is directed correctly towards the users, they perceive a sense of presence inside the virtual environment during their interactions with the character. In this paper we present a simple idea for controlling the gaze of a virtual character based on the viewpoint of the user involved in the interaction, suitable for both virtual reality and augmented reality applications. We also place the idea in the context of existing gaze-control methods by providing a summary of prior work.
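    Viewpoint-driven gaze control of the kind the abstract describes can be sketched as aiming the character's gaze at the tracked user viewpoint and clamping the turn to a plausible range. The coordinate convention (+z forward, +y up), names, and the 60-degree limit are illustrative assumptions.

```python
import math

def gaze_yaw_pitch(eye, target):
    """Yaw and pitch (radians) turning a forward (+z) gaze toward the target."""
    dx, dy, dz = (t - e for t, e in zip(target, eye))
    yaw = math.atan2(dx, dz)                    # rotate about the vertical axis
    pitch = math.atan2(dy, math.hypot(dx, dz))  # then tilt up or down
    return yaw, pitch

def clamped_gaze(eye, target, max_yaw=math.radians(60)):
    """Limit the yaw so the character does not turn its head unnaturally far."""
    yaw, pitch = gaze_yaw_pitch(eye, target)
    return max(-max_yaw, min(max_yaw, yaw)), pitch

# A user standing directly in front of the character needs no head turn.
yaw, pitch = clamped_gaze((0, 1.7, 0), (0, 1.7, 2))
```

    Feeding the tracked viewpoint into this per-frame keeps the character's eyes on the user as they move, which is the core of the presence effect the abstract reports.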

    An Efficient Central Path Algorithm for Virtual Navigation

    We give an efficient, scalable, and simple algorithm for computing a central path for navigation in closed virtual environments. The algorithm requires less preprocessing, produces paths of high visual fidelity, and enables computing paths at multiple resolutions. It is based on a distance-from-boundary field computed on a hierarchical subdivision of the free space inside the closed 3D object. We also present a progressive version of the algorithm, based on a local search strategy, that yields navigable paths in a localized region of interest.
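    The distance-from-boundary idea can be sketched on a flat 2D grid: compute each free cell's clearance from the obstacle boundary, then route between two cells with a step cost that prefers high clearance, pulling the path toward the middle of the free space. The grid, the 1/clearance cost, and 4-connectivity are illustrative assumptions; the paper's hierarchical 3D subdivision is not reproduced.

```python
from collections import deque
import heapq

def clearance_field(grid):
    """Multi-source BFS distance from every free cell ('.') to the boundary ('#')."""
    h, w = len(grid), len(grid[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if grid[y][x] == '#':
                dist[y][x] = 0
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist

def central_path(grid, start, goal):
    """Dijkstra with step cost 1/clearance, so the path hugs high-clearance cells."""
    dist = clearance_field(grid)
    h, w = len(grid), len(grid[0])
    best, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        c, (y, x) = heapq.heappop(heap)
        if (y, x) == goal:
            break
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == '.':
                nc = c + 1.0 / dist[ny][nx]
                if nc < best.get((ny, nx), float('inf')):
                    best[(ny, nx)] = nc
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(heap, (nc, (ny, nx)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

grid = ["#####",
        "#...#",
        "#...#",
        "#...#",
        "#####"]
path = central_path(grid, (1, 1), (3, 3))  # routed through the central cell
```

    The cheapest route detours through the room's center, where clearance is highest, which is exactly the "central path" behavior the abstract describes.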

    Improved Interactive Reshaping of Humans in Images

    In this paper, we present an interactive and flexible approach for realistic reshaping of human bodies in a single image. For reshaping, the user specifies a set of semantic attributes such as weight and height. We then use a 3D morphable-model-based image retouching technique for global reshaping of the human bodies in the image so that they satisfy the user's semantic constraints. We address the problem of deformation of the environment surrounding the reshaped body, which in prior work produces visible artifacts, especially noticeable in regions with structural features. We separate the human figure from the background, which allows us to reshape the figure while preserving the background; missing regions in the background are inpainted in a manner that maintains structural details. We also provide a quantitative measure of distortion and compare our results with prior work.
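    The separate-inpaint-composite pipeline can be illustrated in miniature: lift the figure out with a mask, fill the hole in the background by iterated neighbor averaging (a crude stand-in for the structure-preserving inpainting the abstract describes), then paste the reshaped figure back (here, the "reshape" is just a one-row shift). The image, mask, and fill rule are all illustrative assumptions.

```python
import numpy as np

def inpaint(bg, hole, iters=50):
    """Fill hole pixels with the average of their 4-neighbors, repeatedly."""
    img = bg.astype(float).copy()
    img[hole] = img[~hole].mean()        # rough initial fill
    for _ in range(iters):
        avg = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
        img[hole] = avg[hole]            # only hole pixels are updated
    return img

h, w = 8, 8
image = np.zeros((h, w))
image[:, 4:] = 1.0                        # simple two-tone background
mask = np.zeros((h, w), bool)
mask[3:5, 3:5] = True                     # "figure" region to lift out

background = inpaint(image, mask)         # hole filled from its surroundings
reshaped = background.copy()
reshaped[2:4, 3:5] = image[3:5, 3:5]      # paste the figure back, shifted up one row
```

    Keeping the background as its own layer is what lets the figure deform freely without dragging structural features of the scene along with it.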